13 research outputs found

    An asynchronous method for cloud-based rendering

    Get PDF
    Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, a number of shortcomings are manifest in these systems: high network bandwidths are required for higher resolutions, and input lag due to network fluctuations heavily disrupts the user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline where direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache for indirect lighting which is asynchronously updated using an object-space representation; this allows us to achieve interactive rates that are unconstrained by network performance for a wide range of display resolutions, while remaining robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over the participating clients.
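
    The split described in this abstract can be illustrated with a short sketch: the client shades each point as locally computed direct lighting plus an indirect term read from a cache that a background thread refreshes from the cloud, so the frame rate is decoupled from network latency. This is a minimal illustration only; the cache keys, the fetch_indirect_from_cloud placeholder and the refresh interval are assumptions, not the paper's actual interfaces.

        import threading
        import time

        class IndirectLightCache:
            """Object-space cache of indirect lighting, refreshed asynchronously."""
            def __init__(self):
                self._lock = threading.Lock()
                self._samples = {}                 # object-space key -> (r, g, b) irradiance

            def update(self, new_samples):
                # Called from the network thread whenever the cloud pushes fresh data.
                with self._lock:
                    self._samples.update(new_samples)

            def lookup(self, key, default=(0.0, 0.0, 0.0)):
                with self._lock:
                    return self._samples.get(key, default)

        def fetch_indirect_from_cloud():
            # Placeholder for the asynchronous request to the cloud renderer;
            # it would return {object_space_key: (r, g, b)} irradiance estimates.
            return {}

        def shade(key, direct_rgb, cache):
            # Final shading = locally computed direct + whatever indirect is cached.
            # A stale cache degrades image quality gracefully instead of stalling the frame.
            indirect_rgb = cache.lookup(key)
            return tuple(d + i for d, i in zip(direct_rgb, indirect_rgb))

        def refresh_loop(cache, stop_event, interval=0.1):
            # Runs on its own thread; the render loop never waits on the network.
            while not stop_event.is_set():
                cache.update(fetch_indirect_from_cloud())
                time.sleep(interval)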

    High-fidelity graphics using unconventional distributed rendering approaches

    Get PDF
    High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments. While desktop computing, with the aid of modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, real-time rendering of moderately complex scenes is still unachievable on the majority of desktop machines and on the plethora of mobile computing devices that have recently become commonplace. This work provides a wide range of computing devices with high-fidelity rendering capabilities via oft-unused distributed computing paradigms. It speeds up the rendering process on already-capable devices and provides full functionality to devices that are otherwise incapable of it. Novel scheduling and rendering algorithms have been designed to best take advantage of the characteristics of these systems and to demonstrate the efficacy of such distributed methods. The first is a novel system that provides multiple clients with parallel resources for rendering a single task, and adapts in real time to the number of concurrent requests. The second is a distributed algorithm for the remote asynchronous computation of the indirect diffuse component, which is merged with locally computed direct lighting for a full global illumination solution. The third is a method for precomputing indirect lighting information for dynamically generated multi-user environments by using the aggregated resources of the clients themselves. The fourth is a novel peer-to-peer system for improving rendering performance in multi-user environments through the sharing of computation results, propagated via a mechanism based on epidemiology. The results demonstrate that the boundaries of the distributed computing typically used for computer graphics can be significantly and successfully expanded by adapting alternative distributed methods.
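
    As a rough illustration of the fourth contribution, epidemic ("gossip") propagation of shared rendering results between peers might look like the following sketch. The Peer class, the fanout value and the patch keys are illustrative assumptions rather than the thesis's actual protocol.

        import random

        class Peer:
            def __init__(self, name):
                self.name = name
                self.results = {}                  # e.g. irradiance keyed by scene patch

        def gossip_round(peers, fanout=3):
            """One epidemic round: every peer pushes its known results to a few random peers."""
            for peer in peers:
                others = [p for p in peers if p is not peer]
                for target in random.sample(others, min(fanout, len(others))):
                    # Merging dictionaries is idempotent, so duplicated pushes are harmless.
                    target.results.update(peer.results)

        # Seed one peer with a computed result and gossip until every peer has it.
        peers = [Peer(f"client-{i}") for i in range(8)]
        peers[0].results["patch-42"] = (0.3, 0.2, 0.1)
        rounds = 0
        while not all("patch-42" in p.results for p in peers):
            gossip_round(peers)
            rounds += 1
        print(f"result reached all {len(peers)} peers after {rounds} round(s)")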

    Remote and scalable interactive high-fidelity graphics using asynchronous computation

    Get PDF
    Current computing devices span a large and varied range of computational power. Interactive high-fidelity graphics is still unachievable on many of the devices widely available to the public, such as desktops and laptops without high-end dedicated graphics cards, tablets and mobile phones. In this paper we present a scalable solution for interactive high-fidelity graphics with global illumination in the cloud. Specifically, we introduce a novel method for the asynchronous remote computation of indirect lighting that is both scalable and efficient. A lightweight client implementation merges the remotely computed indirect contribution with locally computed direct lighting for a full global illumination solution. The approach proposed in this paper applies instant radiosity methods to a precomputed point cloud representation of the scene; an equivalent structure on the client side is updated on demand, and used to reconstruct the indirect contribution. This method can be deployed on platforms of varying computational power, from tablets to high-end desktops and video game consoles. Furthermore, the same dynamic GI solution computed on the cloud can be used concurrently with multiple clients sharing a virtual environment with minimal overheads.
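
    To make the instant radiosity component more concrete, the sketch below distributes a light's power over virtual point lights (VPLs) placed on surface samples and sums their diffuse contribution at a shading point. It ignores visibility, and the function names, sample counts and point-cloud inputs are assumptions for illustration, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_vpls(light_power, surface_points, surface_normals, n=64):
            """Place virtual point lights (VPLs) on sampled surface points.
            Visibility from the primary light is ignored in this sketch."""
            idx = rng.choice(len(surface_points), size=n, replace=True)
            flux = np.tile(np.asarray(light_power) / n, (n, 1))   # split power over the VPLs
            return surface_points[idx], surface_normals[idx], flux

        def indirect_at(x, n_x, vpl_pos, vpl_nrm, vpl_flux, clamp=1e-2):
            """Sum the diffuse contribution of every VPL at shading point x with normal n_x."""
            d = vpl_pos - x
            dist2 = np.einsum("ij,ij->i", d, d)
            w = d / np.sqrt(np.maximum(dist2, 1e-12))[:, None]
            geom = np.clip(w @ n_x, 0.0, None)                              # cosine at the receiver
            geom *= np.clip(-np.einsum("ij,ij->i", w, vpl_nrm), 0.0, None)  # cosine at the VPL
            geom /= np.maximum(dist2, clamp)          # clamping bounds the singularity near a VPL
            return (vpl_flux * geom[:, None]).sum(axis=0)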

    Collaborative rendering over peer-to-peer networks

    Get PDF
    Physically-based high-fidelity rendering pervades areas like engineering, architecture, archaeology and defence, amongst others. The computationally intensive algorithms required for such visualisation benefit greatly from added computational resources when exploiting parallelism. In scenarios where multiple users roam around the same virtual scene, and possibly interact with one another, complex visualisation of phenomena like global illumination is traditionally either computed and duplicated at each and every client, or centralised and computed at a single very powerful server. In this paper, we introduce the concept of collaborative high-fidelity rendering over peer-to-peer networks, which aims to reduce redundant computation via collaboration in an environment where client machines are volatile and may join or leave the network at any time.

    Rendering as a service

    Get PDF
    High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments. While desktop computing, boosted by modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, rendering moderately complex scenes may still elude single-machine systems. Moreover, with the increasing adoption of mobile devices, which are incapable of achieving the same computational performance, there is certainly a need for access to further computational resources able to guarantee a certain level of quality.

    Point cloud segmentation for cultural heritage sites

    No full text
    Over the past few years, the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas. This is particularly true in the Cultural Heritage (CH) domain, where point clouds reproducing important and usually unique artifacts and sites of various sizes and geometric complexities are acquired. Specialised software is then usually used to process and organise this data. This paper addresses the problem of automatically organising this raw data by segmenting point clouds into meaningful subsets. This organisation of the raw data entails a reduction in complexity and facilitates the post-processing effort required to work with the individual objects in the scene. This paper describes an efficient two-stage segmentation algorithm which is able to automatically partition raw point clouds. Following an initial partitioning of the point cloud, a RANSAC-based plane-fitting algorithm is used to add a further layer of abstraction. A number of potential uses of the newly processed point cloud are presented, one of which is object extraction using point cloud queries. Our method is demonstrated on three point clouds ranging from 600K to 1.9M points; one of these was acquired from the prehistoric temple of Mnajdra, which consists of multiple adjacent complex structures.
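
    As an illustration of the second-stage plane fitting, a bare-bones RANSAC plane extractor over a NumPy point array might look like the sketch below. The thresholds, iteration counts and the greedy peeling loop are assumed values for illustration, not the parameters used in the paper.

        import numpy as np

        def ransac_plane(points, n_iters=500, threshold=0.02, rng=None):
            """Fit one plane (n, d) with n.x + d = 0 to an (N, 3) array; also return the inlier mask."""
            if rng is None:
                rng = np.random.default_rng(0)
            best_inliers, best_plane = np.zeros(len(points), dtype=bool), None
            for _ in range(n_iters):
                p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-9:                        # degenerate (collinear) sample
                    continue
                n /= norm
                d = -n @ p0
                inliers = np.abs(points @ n + d) < threshold
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_plane = inliers, (n, d)
            return best_plane, best_inliers

        def extract_planes(points, max_planes=5, min_inliers=1000):
            """Greedily peel off dominant planes to build the extra layer of abstraction."""
            segments, remaining = [], points
            for _ in range(max_planes):
                if len(remaining) < min_inliers:
                    break
                plane, mask = ransac_plane(remaining)
                if plane is None or mask.sum() < min_inliers:
                    break
                segments.append((plane, remaining[mask]))
                remaining = remaining[~mask]
            return segments, remaining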

    A System of Self-Correction

    No full text

    Cloud-Based Dynamic GI for Shared VR Experiences

    No full text

    HLTB design for high-speed multi-FPGA pipelines

    No full text
    This paper presents the design and implementation of a high-level test bench (HLTB) for high-speed multi-FPGA pipelines, used to model and simulate architectures that gather and process large amounts of data. The test bench was successfully employed in a nuclear particle detector system forming part of a large physics experiment. The design under test consists of three main stages. The first stage simulates the acquisition of the analog input data, providing designers with a means to verify correct operation under unlimited input variation, be it actual or generated data. The second stage, which contains multiple hierarchies of FPGAs, comprises the actual detector firmware design. The last stage is divided into two modules, data acquisition and triggering, which are based on non-synthesizable VHDL features. The simulated system has been verified against the provided technical documentation. Each module was individually tested; subsequently, integration testing of the entire pipeline was carried out to ascertain its physical correctness across design corners. The upfront costs in terms of time and resources required to set up the environment are outweighed by the benefits of having such a system, which range from the scalability, predictability and manageability of modular systems to overcoming the limitations associated with high-speed synthesis and instrumentation. Hence, factoring high-level test benches into the design pipeline makes them not just an asset but an invaluable tool for the optimization, testing and verification of complex high-speed designs.
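
    The paper's bench itself is written in VHDL using non-synthesizable features; purely to illustrate the same structure (generated stimulus feeding the design under test, with software-side checking against a model), here is a minimal co-simulation sketch using the Python cocotb framework, with hypothetical signal names (clk, rst, din, dout) and a placeholder reference model. It is not the paper's test bench.

        import random
        import cocotb
        from cocotb.clock import Clock
        from cocotb.triggers import RisingEdge

        def reference_model(samples):
            # Placeholder software model of the pipeline; replace with the real behaviour.
            return samples

        @cocotb.test()
        async def pipeline_smoke_test(dut):
            """Drive generated ADC-like stimulus through the DUT, then compare against a model."""
            cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
            dut.rst.value = 1
            await RisingEdge(dut.clk)
            dut.rst.value = 0

            sent = []
            for _ in range(100):
                sample = random.randrange(0, 2**12)      # emulated 12-bit ADC sample
                dut.din.value = sample
                sent.append(sample)
                await RisingEdge(dut.clk)

            expected = reference_model(sent)
            # A real bench would align for pipeline latency and compare the captured
            # output stream on dut.dout against 'expected'; that check is omitted here.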

    EDITORIAL COMMENTS

    No full text